Module - 2: Analog and Digital Meters
On completion of this module, you will be able to:
- Describe the construction and working principle of a given instrument.
- Determine resolution, sensitivity, and accuracy of the given instrument.
- Convert the PMMC instrument into a DC ammeter for the given range.
- Convert the PMMC instrument into a DC voltmeter for the given range.
- Describe the working of the given type of ohmmeter and AC voltmeter.
Instruments
- An instrument is a device which is used to determine the magnitude or value of the quantity to be measured.
- An instrument may also be defined as a machine or system designed to maintain a functional relationship between prescribed properties of physical variables, and it may include a means of communicating the result to a human observer.
- The measured quantity can be voltage, current, power, energy, etc.
- Generally, instruments are classified into two categories:
1. Absolute instrument
2. Secondary instrument
Absolute Instrument
- An absolute instrument gives the quantity to be measured in terms of the instrument constants (instrument parameters) and its deflection.
- These instruments are rarely used, because the magnitude of the measured quantity has to be calculated analytically from the deflection each time, which is time consuming.
- These types of instruments are therefore suitable mainly for laboratory use. Example: the tangent galvanometer, which measures current in terms of the tangent of the angle of deflection produced by the current, the coil radius, and the number of turns of the galvanometer (see the sketch below).
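A minimal sketch of how such an absolute measurement works, using the standard tangent-galvanometer relation I = (2·r·B_H / (μ0·N))·tan θ. All numerical values (coil radius, number of turns, horizontal field B_H, deflection angle) are assumptions chosen for illustration and are not taken from the text.

```python
import math

# Tangent-galvanometer relation: I = (2 * r * B_H) / (mu0 * N) * tan(theta)
MU0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A
B_H = 3.0e-5               # horizontal component of Earth's field, T (assumed)
r = 0.10                   # coil radius, m (assumed)
N = 50                     # number of turns (assumed)
theta_deg = 30.0           # observed deflection, degrees (assumed)

K = (2 * r * B_H) / (MU0 * N)              # reduction factor (instrument constant)
I = K * math.tan(math.radians(theta_deg))  # current computed analytically
print(f"Measured current ≈ {I * 1e3:.1f} mA")
```

Note how the current is obtained purely from instrument constants and the observed deflection, which is what makes the instrument "absolute" but also slow to use.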
Secondary instrument
- This instrument determines the value of the quantity to be measured directly.
- Generally, these instruments are calibrated by comparison with a standard instrument before being put into use.
- Examples of such instruments are the voltmeter, ammeter, wattmeter, etc. In practice, secondary instruments are the ones used for measurement.
Classification of Secondary Instruments
- Secondary instruments can be classified into three types:
1. Indicating type
2. Recording Type
3. Integrating type
Precision
- If an instrument indicates the same value repeatedly when it is used to measure the same quantity under the same circumstances any number of times, then the instrument is said to have high precision.
Sensitivity
- The ratio of the change in output, ΔAout, of an instrument to a given change in the input, ΔAin, that is to be measured is called the sensitivity, S. Mathematically, S = ΔAout / ΔAin.
- The term sensitivity signifies the smallest change in the measurable input that is required for an instrument to respond.
- If the calibration curve is linear, then the sensitivity of the instrument is a constant, equal to the slope of the calibration curve.
- If the calibration curve is nonlinear, then the sensitivity is not a constant and varies with the input.
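A minimal sketch of estimating sensitivity as the slope of a linear calibration curve. The instrument (a sensor read in mV against an applied input in kPa) and all data points are assumptions used purely for illustration.

```python
# Sensitivity S = ΔAout / ΔAin, estimated as the slope of the calibration curve.
inputs = [0.0, 20.0, 40.0, 60.0, 80.0]    # applied input, kPa (assumed)
outputs = [2.0, 12.1, 22.0, 31.9, 42.1]   # indicated output, mV (assumed)

# Least-squares slope gives the average sensitivity in mV/kPa.
n = len(inputs)
mean_x = sum(inputs) / n
mean_y = sum(outputs) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, outputs))
         / sum((x - mean_x) ** 2 for x in inputs))
print(f"Sensitivity ≈ {slope:.3f} mV/kPa")
```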
Resolution
- If the output of an instrument changes only when there is a specific increment of the input, then that increment of the input is called the resolution.
- That means the instrument can effectively resolve a change in the input only when the change is at least equal to the resolution.
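A short sketch of how a finite resolution limits the indicated value: the indication changes only in steps equal to the resolution. The 0.01 V step and the test voltages are assumed figures.

```python
def indicated(value, resolution=0.01):
    """Round the true input to the nearest step the instrument can display."""
    return round(value / resolution) * resolution

# Inputs closer together than the resolution produce the same indication.
for v in (1.232, 1.234, 1.236, 1.241):
    print(f"true = {v:.3f} V  ->  indicated = {indicated(v):.2f} V")
```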
Repeatability
- A measure of how closely the output returns to a given value when the same precise input is applied several times; in other words, the ability of an instrument to reproduce a set of readings within a given accuracy.
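Precision and repeatability are often quantified as the spread (standard deviation) of repeated readings of the same input. The ten readings below are assumed values for illustration.

```python
import statistics

# Ten readings of the same nominal 10.00 V source (assumed values).
readings = [10.02, 9.99, 10.01, 10.00, 10.02, 9.98, 10.01, 10.00, 9.99, 10.02]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)   # sample standard deviation of the readings
print(f"mean = {mean:.3f} V, repeatability (1 sigma) ≈ {spread:.3f} V")
```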
Range or span
- The range or span of an instrument defines the minimum and maximum values of a quantity that the instrument is designed to measure.
- For example, for a temperature measuring instrument the input range may be 100–500 °C and the output range may be 4–20 mA.
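Taking the example above, a transmitter that maps a 100–500 °C input span onto a 4–20 mA output span does so with a simple linear scaling. A minimal sketch (the test temperatures are arbitrary):

```python
def temp_to_current(t_c, t_min=100.0, t_max=500.0, i_min=4.0, i_max=20.0):
    """Linearly map a temperature in °C onto the 4-20 mA output span."""
    return i_min + (t_c - t_min) * (i_max - i_min) / (t_max - t_min)

print(temp_to_current(100.0))   # 4.0 mA  (bottom of the span)
print(temp_to_current(300.0))   # 12.0 mA (mid-span)
print(temp_to_current(500.0))   # 20.0 mA (top of the span)
```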
Linearity
- Linearity is actually a measure of the nonlinearity of the instrument. When we talk about sensitivity, we assume that the input/output characteristic of the instrument is approximately linear.
- In practice, however, it is normally nonlinear, as shown in Fig. 1. Linearity is defined as the maximum deviation from the linear characteristic, expressed as a percentage of the full-scale output. Thus: Linearity (%) = (maximum deviation from the linear characteristic / full-scale output) × 100.
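A minimal sketch of computing percentage non-linearity as the maximum deviation of measured outputs from a straight line, expressed as a percentage of full-scale output. The data points and the choice of an end-point (terminal-based) reference line are assumptions for illustration.

```python
# Non-linearity (%) = (max deviation from the ideal straight line / full-scale output) * 100
inputs = [0, 25, 50, 75, 100]             # % of input range (assumed)
outputs = [0.0, 26.0, 49.0, 76.5, 100.0]  # indicated output (assumed readings)

full_scale = outputs[-1]
# Ideal straight line drawn through the two end points (terminal-based linearity).
ideal = [outputs[0] + (outputs[-1] - outputs[0]) * (x - inputs[0]) / (inputs[-1] - inputs[0])
         for x in inputs]
max_dev = max(abs(m - i) for m, i in zip(outputs, ideal))
print(f"Non-linearity ≈ {100 * max_dev / full_scale:.2f} % of full scale")
```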
Dynamic Characteristics
- The characteristics of instruments that are used to measure quantities or parameters which vary very quickly with respect to time are called dynamic characteristics.
- The following is the list of dynamic characteristics:
- Speed of Response
- Dynamic Error
- Fidelity
- Lag
Speed of Response
- The speed at which the instrument responds whenever there is any change in the quantity to be measured is called speed of response. It indicates how fast the instrument is.
Lag
- The amount of delay present in the response of an instrument whenever there is a change in the quantity to be measured is called measuring lag. It is also simply called lag.
Dynamic Error
- The difference between the true value, At, of the quantity that varies with respect to time and the value indicated by the instrument, Ai, is known as the dynamic error, ed = At − Ai.
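A minimal sketch of dynamic error (and lag) for a first-order instrument tracking a ramp input. The ramp A_t(t) = 2t and the 0.5 s time constant are assumed values; the indicated-value expression is the standard first-order ramp response.

```python
import math

# True quantity: a ramp A_t(t) = 2*t. A first-order instrument with time
# constant tau indicates A_i(t) = 2*(t - tau) + 2*tau*exp(-t/tau), so the
# dynamic error e_d = A_t - A_i approaches 2*tau (all values assumed).
tau = 0.5
for t in (0.5, 1.0, 2.0, 5.0):
    A_t = 2 * t
    A_i = 2 * (t - tau) + 2 * tau * math.exp(-t / tau)
    print(f"t = {t:4.1f}: true = {A_t:5.2f}, indicated = {A_i:5.2f}, "
          f"dynamic error = {A_t - A_i:5.2f}")
```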
Fidelity
- The degree to which an instrument indicates changes in the measured quantity without any dynamic error is known as fidelity.
Types of error
- The static error was defined earlier as the difference between the true value of the variable and the value indicated by the instrument. Static errors may arise due to a number of reasons.
- The static errors are classified as:
- Gross errors
- Systematic errors
- Random errors
Gross errors
- Gross errors mainly occur due to carelessness or lack of experience of the person making the measurement. They cover human mistakes in reading, recording and calculating results.
- These errors also occur due to incorrect adjustments of instruments. These errors cannot be treated mathematically.
- These errors are also called personal errors. Some gross errors are easily detected while others are very difficult to detect.
Systematic errors
- Systematic errors mainly result from the shortcomings of the instrument and the characteristics of the materials used in it, such as defective or worn parts, ageing effects, environmental effects, etc.
- A constant, uniform deviation in the operation of an instrument is known as a systematic error. There are three types of systematic errors:
- Instrumental errors
- Environmental errors
- Observational errors
Random errors
- Even after the systematic and instrumental errors are reduced, or at least accounted for, some errors still remain.
- The causes of such errors are unknown and hence the errors are called random errors.
- These errors cannot be determined in the ordinary process of taking the measurements.
Absolute and relative errors
- When the error is specified in terms of an absolute quantity and not as a percentage, it is called an absolute error. Thus, for a voltage specified as 10 ± 0.5 V, the ±0.5 V is the absolute error.
- When the error is expressed as a percentage or as a fraction of the total quantity to be measured, it is called a relative error.
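A small worked sketch of the same idea, assuming a true value of 10 V and a reading of 10.5 V (values chosen only for illustration):

```python
true_value = 10.0   # volts (assumed)
measured = 10.5     # volts (assumed)

absolute_error = measured - true_value        # expressed in volts
relative_error = absolute_error / true_value  # dimensionless fraction
print(f"Absolute error = {absolute_error:+.2f} V")
print(f"Relative error = {100 * relative_error:.1f} %")
```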
Loading effect
- While selecting a meter for a particular measurement, the sensitivity rating is very important. A low-sensitivity meter may give an accurate reading in a low-resistance circuit but will produce a totally inaccurate reading in a high-resistance circuit.
- The voltmeter is always connected across the two points between which the potential difference is to be measured.
- If it is connected across a low resistance, then, since the voltmeter resistance is much higher, most of the current still passes through the low resistance, and the voltage drop it produces is essentially the true reading.
- But if the voltmeter is connected across a high resistance, then, with two comparable resistances in parallel, the current divides almost equally between the two paths. The meter therefore records a voltage drop across the high resistance that is much lower than the true reading.
- Thus, a low-sensitivity instrument used in a high-resistance circuit gives a lower-than-true reading. This is called the loading effect of voltmeters, and it is mainly caused by low-sensitivity (low input resistance) instruments.
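A minimal sketch of the loading effect, modelling the circuit as a Thevenin source and the voltmeter as its input resistance. The 200 kΩ meter resistance and the source resistances are assumed values, not figures from the text.

```python
def indicated_voltage(v_open, r_source, r_meter):
    """Voltage shown by a meter of input resistance r_meter connected across a
    source with open-circuit voltage v_open and internal resistance r_source."""
    return v_open * r_meter / (r_meter + r_source)

V_OPEN = 10.0     # true (open-circuit) voltage, V (assumed)
R_METER = 200e3   # meter input resistance, ohms (e.g. 20 kΩ/V on a 10 V range, assumed)

for r_source in (1e3, 100e3, 1e6):   # low-, medium- and high-resistance circuits
    v = indicated_voltage(V_OPEN, r_source, R_METER)
    print(f"R_source = {r_source/1e3:7.0f} kΩ -> reads {v:5.2f} V "
          f"({100 * (V_OPEN - v) / V_OPEN:4.1f} % low)")
```

The higher the circuit resistance relative to the meter's input resistance, the larger the error, which is exactly the loading effect described above.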
Calibration
- Instrument calibration is one of the primary processes used to maintain instrument accuracy. Calibration is the process of configuring an instrument to provide a result for a sample within an acceptable range.
- In other words, calibration is the process of adjusting and verifying the accuracy of a measuring instrument or system, such as an electronic device or sensor, to ensure that it provides the correct readings or outputs within the specified tolerance levels.
- If any deviation is found, the instrument is recalibrated so that it gives correct readings and values. It is common for any instrument to lose its calibration after a long period of usage.
- After the process of calibration, the instrument is good to use again.
Need for Calibration
- A crucial measurement
- If the instrument has undergone adverse condition and cannot give the right reading
- When the output does not match the standard instrument
- Drastic change in weather
- Cyclic testing of instruments
Advantages of Calibration
- Calibration is proof that the instrument is working properly.
- Increases the confidence of instrument user.
- Calibration fulfils the requirement of traceability.
- Increases power saving and cost saving.
- Reduces rejection and failure rates, hence gives higher productivity.
- Interchangeability.
- Improved product and service quality, leading to satisfied customers.
- Increases safety.
Calibration Process
- The calibration process in electronics generally involves the following steps:
- Preparation: This step involves ensuring that the device to be calibrated is properly cleaned and in good working condition and that all necessary tools and reference standards are available.
- Connection: The device to be calibrated is connected to the reference standard and any necessary test equipment is set up.
- Measurement: The device is then measured using the reference standard, and the readings are compared to the known values of the reference standard.
- Adjustment: If necessary, the device is adjusted to bring its readings into alignment with the reference standard. This may involve adjusting internal electronics or physical components or making changes to the device’s software or firmware.
- Documentation: The results of the calibration are documented, including the readings of the device before and after calibration, the reference standard used, and any adjustments made to the device.
- Verification: The device is then re-measured to verify that it is providing accurate and consistent readings, and to ensure that the calibration process was successful.
- Repeat: If necessary, the calibration process may be repeated several times to ensure that the device provides accurate readings.
- This is a general overview of the calibration process. It may vary depending on the type of device being calibrated and the level of accuracy required for the application.
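As a rough sketch of the measurement, adjustment and verification steps, a simple two-point (zero and span) correction against a reference standard might look like the following. The readings, the 5.00 V verification point and the tolerance are assumed values, not a prescribed procedure.

```python
# Two-point calibration sketch: derive gain and offset corrections from readings
# taken against a reference standard, then verify the result (assumed values).
ref_points = [0.0, 10.0]          # reference-standard values, V
device_readings = [0.12, 10.35]   # device readings before adjustment, V

# Fit reading = gain * reference + offset, then invert it as the correction.
gain = (device_readings[1] - device_readings[0]) / (ref_points[1] - ref_points[0])
offset = device_readings[0] - gain * ref_points[0]

def corrected(reading):
    """Apply the calibration correction to a raw device reading."""
    return (reading - offset) / gain

# Verification: re-measure a known 5.00 V reference and check against the tolerance.
raw = 5.28          # raw reading at the 5.00 V point (assumed)
tolerance = 0.05    # acceptable error after calibration, V (assumed)
error = corrected(raw) - 5.00
print(f"corrected reading = {corrected(raw):.3f} V, error = {error:+.3f} V, "
      f"{'PASS' if abs(error) <= tolerance else 'FAIL'}")
```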